24 research outputs found

    Rede neural convolucional eficiente para detecção e contagem dos glóbulos sanguíneos

    Blood cell analysis is an important part of health and immunity assessment. Blood has three major cellular components: red blood cells, white blood cells, and platelets. The count and density of these blood cells are used to detect multiple disorders (anemia, leukemia, among others). Traditional methods are time-consuming, and the test cost is high. Thus arises the need for automated methods that can detect the different kinds of blood cells and count them. A convolutional neural network-based framework is proposed for detecting and counting the cells. The network is trained for multiple iterations, and the model with the lowest validation loss is saved. Experiments analyzing the performance of the detection system show high accuracy in cell counting. Average precision is computed by comparing detections against the ground-truth labels of the respective classes; it ranges from 70% to 99.1%, with a mean average precision of 85.35%. The proposed framework also has low time complexity: it takes only 0.111 seconds to process an image frame of 640×480 pixels. The system can also be implemented on low-cost single-board computers for rapid prototyping. The framework's ability to identify and count different blood cells can assist medical professionals in finding disorders and making decisions based on the resulting report.
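    The per-class counting step described above can be sketched as follows. This is an illustrative snippet, not the authors' code; the function and parameter names (`count_cells`, `score_threshold`) are assumptions for the sketch.

    ```python
    from collections import Counter

    def count_cells(detections, score_threshold=0.5):
        """Count detected blood cells per class from a list of
        (class_name, confidence) detections, as a CNN detector of the
        kind described might emit for one image frame.
        Names here are illustrative, not taken from the paper."""
        counts = Counter()
        for class_name, confidence in detections:
            if confidence >= score_threshold:  # keep confident detections only
                counts[class_name] += 1
        return dict(counts)

    # Example: hypothetical detections for one 640x480 frame
    dets = [("RBC", 0.92), ("RBC", 0.88), ("WBC", 0.97),
            ("Platelet", 0.76), ("RBC", 0.41)]
    print(count_cells(dets))  # {'RBC': 2, 'WBC': 1, 'Platelet': 1}
    ```

    The low-confidence detection is discarded by the threshold, so only three RBCs become two in the final count; the resulting per-class tallies are what a report for a medical professional would summarize.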

    AI-CardioCare: Artificial Intelligence Based Device for Cardiac Health Monitoring


    Optimized High Resolution 3D Dense-U-Net Network for Brain and Spine Segmentation

    3D image segmentation is the process of partitioning a digital 3D volume into multiple segments. This paper presents a fully automatic method for high-resolution 3D volumetric segmentation of medical image data using a modern supervised deep learning approach. We introduce the 3D Dense-U-Net neural network architecture, which implements densely connected layers. It has been optimized for GPU-accelerated high-resolution image processing on currently available hardware (Nvidia GTX 1080 Ti). The method has been evaluated on an MRI brain 3D volumetric dataset and on a CT thoracic scan dataset for spine segmentation. In contrast with many previous methods, our approach precisely segments the input data at its original resolution, without any pre-processing of the input image. It processes image data in 3D and achieved an accuracy of 99.72% on the MRI brain dataset, outperforming the results achieved by a human expert. On the lumbar and thoracic vertebrae CT dataset it achieved an accuracy of 99.80%. The architecture proposed in this paper can also be easily applied to any task already using the U-Net network as a segmentation algorithm to enhance its results. The complete source code was released online under an open-source license
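    The dense connectivity that distinguishes Dense-U-Net from a plain U-Net can be illustrated with a toy NumPy sketch: each layer receives the channel-wise concatenation of all preceding feature maps. This is a minimal sketch under assumed names (`dense_block`, `growth`), not the authors' implementation, and uses a random 1×1×1 convolution per layer purely to show the shape bookkeeping.

    ```python
    import numpy as np

    def dense_block(x, num_layers=3, growth=4, rng=None):
        """Toy sketch of dense connectivity: every layer sees the
        concatenation of the input and all earlier layer outputs
        along the channel axis. Weights are random placeholders."""
        if rng is None:
            rng = np.random.default_rng(0)
        features = [x]  # running list of all feature maps so far
        for _ in range(num_layers):
            inp = np.concatenate(features, axis=0)          # channel-wise concat
            w = rng.standard_normal((growth, inp.shape[0]))  # 1x1x1 conv weights
            out = np.maximum(w @ inp.reshape(inp.shape[0], -1), 0)  # conv + ReLU
            features.append(out.reshape(growth, *x.shape[1:]))
        return np.concatenate(features, axis=0)

    # A tiny 3D volume: 2 channels, 4x4x4 voxels
    vol = np.ones((2, 4, 4, 4))
    y = dense_block(vol)
    print(y.shape)  # (14, 4, 4, 4): 2 input + 3 layers x 4 growth channels
    ```

    The channel count grows linearly with depth (2 + 3·4 = 14 here), which is why dense blocks reuse features efficiently but need transition layers to keep memory in check at high resolution.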

    High Embedding Capacity and Robust Audio Watermarking for Secure Transmission Using Tamper Detection

    Robustness, payload, and imperceptibility of audio watermarking algorithms are conflicting design goals, particularly when high-level security of the watermark is required. In this study, the major challenge of achieving a high payload along with adequate robustness against demanding signal-processing attacks is addressed. Moreover, a security code is strategically used for secure transmission of data, providing tamper detection at the receiver end. The high watermark payload is achieved by exploiting the third-level detail coefficients of the discrete wavelet transform, where the human auditory system is not sensitive to alterations in the audio signal. To counter watermark loss under such attacks at high payload, Daubechies wavelets are used; their orthogonality and smoother frequency response help protect the embedded data from loss under signal-processing attacks. Experimental results indicate that the proposed algorithm demonstrates adequate robustness against signal-processing attacks at 4,884.1 bps. Among the evaluators, 87% rated the proposed algorithm as remarkable in terms of transparency
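    The "third-level detail coefficients" that carry the watermark can be reached by cascading the DWT three times and taking the detail branch of the last level. The sketch below uses a Haar wavelet purely to stay dependency-free (the paper uses Daubechies wavelets, which share the orthogonality property); the function names are assumptions, not the paper's.

    ```python
    import numpy as np

    def haar_step(s):
        """One orthogonal Haar DWT level: returns (approximation,
        detail) halves, each half the input length. Stands in for
        the Daubechies filters used in the paper."""
        s = np.asarray(s, dtype=float)
        a = (s[0::2] + s[1::2]) / np.sqrt(2)  # low-pass half
        d = (s[0::2] - s[1::2]) / np.sqrt(2)  # high-pass half
        return a, d

    def third_level_detail(signal):
        """Decompose three levels deep and return the detail
        coefficients a watermark bit-stream would be embedded in."""
        a, _ = haar_step(signal)   # level 1
        a, _ = haar_step(a)        # level 2
        _, d3 = haar_step(a)       # level 3 detail coefficients
        return d3

    x = np.arange(16, dtype=float)
    print(third_level_detail(x).shape)  # (2,): 16 samples -> 8 -> 4 -> 2
    ```

    Because each level halves the coefficient count, the third-level band holds one eighth of the samples; embedding there trades capacity for the perceptual headroom the abstract describes.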

    Nonlocal mode-coupling interactions and phase transition near tricriticality

    Employing Wilson's renormalization group scheme, we investigate the critical behaviour of a modified Ginzburg-Landau model with a nonlocal mode-coupling interaction in the quartic term. Carrying out the calculations at one-loop order, we obtain the critical exponents to leading order in ε = 4 − d − 2ρ, where ρ is an exponent occurring in the nonlocal interaction term and d is the space dimension. Interestingly, the correlation exponent η is found to be non-zero at one-loop order, and the ε expansion corresponds to an expansion about the tricritical mean-field theory in three dimensions, unlike the conventional Φ⁴ theory. The ensuing critical exponents are in good agreement with experimental values for samples close to tricriticality. Our analysis indicates that tricriticality is a feature only in three dimensions
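    The role of the expansion parameter can be made explicit with a one-line check (the value ρ = 1/2 below is simple arithmetic from the abstract's formula, not a result quoted from the paper):

    ```latex
    % Expansion parameter of the nonlocal model (from the abstract):
    \epsilon = 4 - d - 2\rho
    % Setting d = 3 gives \epsilon = 1 - 2\rho, which vanishes at
    % \rho = \tfrac{1}{2}: the expansion is organized around the
    % tricritical mean-field point in three dimensions, rather than
    % around the upper critical dimension d = 4 of conventional
    % \Phi^4 theory, where \epsilon = 4 - d.
    ```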

    Biometric data security using joint encryption and watermarking


    Multi–GPU Implementation of Machine Learning Algorithm using CUDA and OpenCL

    Modern Graphics Processing Units (GPUs) are very useful for computing complex and time-consuming processes, providing high-performance computation at a good price. This paper presents multi-GPU OpenCL and CUDA implementations of the k-Nearest Neighbor (k-NN) algorithm. The work compares the performance of the OpenCL and CUDA implementations, each of which is suitable for a different number of attributes. The proposed CUDA algorithm achieves speed-ups of up to 880x compared with a single-thread CPU version. The common k-NN was modified to run faster when a lower number of k neighbors is set. The performance of the algorithm was verified with two dual-GPU NVIDIA GeForce GTX 690 cards and an Intel Core i7 3770 CPU at 4.1 GHz. Speed-ups were measured for one, two, three, and four GPUs. We performed several tests with data sets containing up to 4 million elements with various numbers of attributes
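    The computation being accelerated is brute-force k-NN: the distance evaluation over all training rows is the embarrassingly parallel part that the paper offloads to GPUs via CUDA/OpenCL. A minimal single-thread reference in plain NumPy (names are illustrative, not from the paper):

    ```python
    import numpy as np

    def knn_predict(train_X, train_y, query, k=3):
        """Brute-force k-NN classification. The distance computation
        below is the per-row-parallel step a GPU kernel would run;
        this NumPy version is the CPU baseline such work compares against."""
        dists = np.linalg.norm(train_X - query, axis=1)  # one distance per training row
        nearest = np.argsort(dists)[:k]                  # indices of the k closest rows
        labels, counts = np.unique(train_y[nearest], return_counts=True)
        return labels[np.argmax(counts)]                 # majority vote among neighbors

    X = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 4.9]])
    y = np.array([0, 0, 1, 1])
    print(knn_predict(X, y, np.array([0.2, 0.0]), k=3))  # 0
    ```

    With millions of elements, each query needs millions of independent distance evaluations, which is why a GPU version can reach the large speed-ups reported, and why the attribute count (vector dimension) shifts the OpenCL/CUDA trade-off.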